GAN vocoders are currently among the state-of-the-art methods for building high-quality neural waveform generative models. However, most of their architectures require tens of billions of floating-point operations per second (GFLOPS) to generate speech waveforms in a samplewise manner, which makes them still challenging to run on normal CPUs without accelerators or parallel computing. In this work, we propose a new architecture for GAN vocoders that mainly relies on recurrent and fully-connected networks to directly generate the time-domain signal in a framewise manner. This considerably reduces the computational cost and enables very fast generation on both GPUs and low-complexity CPUs. Experimental results show that our Framewise WaveGAN vocoder achieves significantly higher quality than auto-regressive maximum-likelihood vocoders such as LPCNet at a very low complexity of 1.2 GFLOPS, making GAN vocoders more practical on edge and low-power devices.
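A minimal sketch, assuming a GRU-plus-MLP layout (the exact Framewise WaveGAN architecture is not reproduced here), of what framewise generation means in practice: the recurrent network advances once per frame of conditioning features, and a fully-connected head emits a whole frame of samples in a single pass, instead of one network pass per sample:

```python
# Illustrative framewise generator; feat_dim, hidden and frame_size
# are placeholder values, not the paper's configuration.
import torch
import torch.nn as nn

class FramewiseGenerator(nn.Module):
    def __init__(self, feat_dim=80, hidden=512, frame_size=160):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, frame_size), nn.Tanh(),  # samples in [-1, 1]
        )

    def forward(self, feats):          # feats: (batch, n_frames, feat_dim)
        h, _ = self.rnn(feats)         # one recurrent step per frame
        frames = self.head(h)          # (batch, n_frames, frame_size)
        return frames.flatten(1)       # (batch, n_frames * frame_size)

wav = FramewiseGenerator()(torch.randn(1, 100, 80))  # ~1 s at 16 kHz
```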
Recent research has shown remarkable performance in leveraging multiple extraneous, non-mutually exclusive semantic concepts as conditions for sound source separation, allowing the flexibility to extract a given target source based on multiple different queries. In this work, we propose a new optimal condition training (OCT) method for single-channel target source separation, based on greedy parameter updates using the highest-performing condition among the equivalent conditions associated with a given target source. Our experiments show that the complementary information carried by the diverse semantic concepts helps to disentangle and isolate sources of interest much more efficiently than single-conditioned models. Moreover, we propose a variation of OCT with condition refinement, in which an initial conditional vector is adapted to the given mixture and transformed into a representation more amenable to target source extraction. We showcase the effectiveness of OCT on diverse source separation experiments, where it improves upon permutation-invariant models with oracle assignment and obtains state-of-the-art performance on the more challenging task of text-based source separation, outperforming even dedicated text-only conditioned models.
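The greedy update at the heart of OCT can be sketched as follows; the separator signature, condition format, and loss function are placeholders, not the authors' code:

```python
# For each example, run the separator under every equivalent condition
# for the SAME target (e.g. a class embedding, a text-query embedding,
# a signal cue), keep the best-performing one, and backpropagate
# through that condition alone.
import torch

def oct_step(model, mixture, target, conditions, optimizer, loss_fn):
    with torch.no_grad():
        losses = [loss_fn(model(mixture, c), target) for c in conditions]
        best = min(range(len(conditions)), key=lambda i: losses[i].item())
    # Greedy update: gradients flow only through the winning condition.
    loss = loss_fn(model(mixture, conditions[best]), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), best
```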
In this paper, we work on a sound recognition system that continually incorporates new sound classes. Our main goal is to develop a framework in which the model can be updated without relying on labeled data. For this purpose, we propose adopting representation learning, where an encoder is trained using unlabeled data. This learning framework enables the study and implementation of a practically relevant use case in which only a small amount of labeled data is available in a continual learning context. We also make the empirical observation that a similarity-based representation learning method within this framework is robust to forgetting, even when no explicit mechanism against forgetting is employed. We show that this approach performs similarly to several distillation-based continual learning methods when applied on top of self-supervised representation learning.
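A sketch, under assumptions, of the studied setting: the encoder keeps training on unlabeled audio with a similarity-based objective (BYOL/SimSiam-style negative cosine similarity between two views), while the few available labels are used only to build class-mean prototypes for recognizing new classes:

```python
# Illustrative only; encoder/predictor architectures and the choice of
# a prototype classifier are assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F

def similarity_loss(z1, z2):
    # negative cosine similarity with stop-gradient on the target view
    return -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()

def update_encoder(encoder, predictor, view1, view2, optimizer):
    loss = 0.5 * (similarity_loss(predictor(encoder(view1)), encoder(view2))
                  + similarity_loss(predictor(encoder(view2)), encoder(view1)))
    optimizer.zero_grad(); loss.backward(); optimizer.step()

def build_prototypes(encoder, labeled_clips, labels):
    # Few labeled clips per (possibly new) class -> class-mean embeddings.
    with torch.no_grad():
        z = F.normalize(encoder(labeled_clips), dim=-1)
    return {c: z[labels == c].mean(0) for c in labels.unique().tolist()}
```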
We present RemixIT, a simple yet effective self-supervised method for training speech enhancement without the need for a single isolated in-domain speech or noise waveform. Our approach overcomes limitations of previous methods which make them dependent on clean in-domain target signals and therefore sensitive to any domain mismatch between train and test samples. RemixIT is based on a continuous self-training scheme in which a teacher model pre-trained on out-of-domain data infers estimated pseudo-target signals for in-domain mixtures. Then, by permuting the estimated clean and noise signals and remixing them together, we generate a new set of bootstrapped mixtures and corresponding pseudo-targets which are used to train the student network. Vice versa, the teacher periodically refines its estimates using the updated parameters of the latest student model. Experimental results on multiple speech enhancement datasets and tasks not only show the superiority of our method over previous approaches, but also showcase that RemixIT can be combined with any separation model and applied to any semi-supervised or unsupervised domain adaptation task. Our analysis, paired with empirical evidence, sheds light on the inner workings of our self-training scheme, wherein the student model keeps obtaining better performance while observing severely degraded pseudo-targets.
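The bootstrapped-remixing step described above can be sketched roughly as follows (tensor shapes and the teacher/student interfaces are illustrative assumptions):

```python
# The teacher separates each in-domain mixture into estimated speech
# and noise; noise estimates are permuted across the batch and remixed
# with the speech estimates; the student trains on these pseudo-pairs.
import torch

def remixit_step(teacher, student, mixtures, optimizer, loss_fn):
    with torch.no_grad():
        est_speech, est_noise = teacher(mixtures)        # pseudo-targets
    perm = torch.randperm(est_noise.shape[0])
    boot_mix = est_speech + est_noise[perm]              # remixed inputs
    pred_speech, pred_noise = student(boot_mix)
    loss = (loss_fn(pred_speech, est_speech)
            + loss_fn(pred_noise, est_noise[perm]))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Periodically, the teacher is refreshed from the student, e.g. via an
# exponential moving average of the student's weights.
```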
Non-linear state-space models, also known as general hidden Markov models, are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences in general. The particle-based, rapid incremental smoother PaRIS is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models. Such expectations appear naturally in several learning contexts, such as maximum-likelihood estimation (MLE) and Markov score climbing (MSC). PaRIS has linear computational complexity, limited memory requirements, and comes with non-asymptotic bounds, convergence results, and stability guarantees. Still, being based on self-normalised importance sampling, the PaRIS estimator is biased. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. Our second contribution is to apply PPG in a learning framework, covering MLE and MSC as special examples. In this context, we establish, under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao--Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
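For concreteness, here is a minimal PaRIS sketch on a scalar linear-Gaussian model; this is the biased base algorithm, not the PPG variant, and it uses naive O(N^2) exact backward sampling where the actual PaRIS achieves linear complexity via an accept-reject trick:

```python
# Bootstrap particle filter with PaRIS-style online updates of the
# smoothed additive functional sum_t h(x_t, x_{t+1}).  Model:
# x_{t+1} = phi * x_t + N(0, sq^2),  y_t = x_t + N(0, sr^2).
import numpy as np

def paris(y, N=500, Ntilde=2, phi=0.9, sq=1.0, sr=1.0, h=lambda a, b: a * b):
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, sq, N)                 # particles at time 0
    tau = np.zeros(N)                          # additive statistics
    logw = -0.5 * ((y[0] - x) / sr) ** 2       # weight with first obs.
    for t in range(1, len(y)):
        w = np.exp(logw - logw.max()); w /= w.sum()
        anc = rng.choice(N, N, p=w)            # multinomial resampling
        xnew = phi * x[anc] + rng.normal(0.0, sq, N)
        taunew = np.empty(N)
        for i in range(N):                     # naive O(N^2) backward pass
            lb = logw - 0.5 * ((xnew[i] - phi * x) / sq) ** 2
            b = np.exp(lb - lb.max()); b /= b.sum()
            J = rng.choice(N, Ntilde, p=b)     # Ntilde backward draws
            taunew[i] = np.mean(tau[J] + h(x[J], xnew[i]))
        x, tau = xnew, taunew
        logw = -0.5 * ((y[t] - x) / sr) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    return np.sum(w * tau)                     # self-normalised estimate
```

The self-normalisation in the final line is exactly the source of the bias that the conditional-SMC moves of PPG are designed to reduce.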
In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic labels to a prosody encoder module which augments an autoregressive attention-based text-to-speech model. We utilize various methods to improve prosodic control range and coverage, such as augmentation, F0 normalization, balanced clustering for duration, and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining speaker identity. Instead of relying on reference utterances for inference, we introduce a prior prosody encoder which learns the style of each speaker and enables speech synthesis without the need for reference audio. We also fine-tune the multispeaker model to unseen speakers with limited amounts of data, as a realistic application scenario, and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model produces high-quality output speech and that the proposed method allows efficient prosody control within each speaker's range, despite the variability that a multispeaker setting introduces.
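A hypothetical sketch of the discretization step: per-phoneme F0 (speaker-normalized) and duration values are clustered independently, and the centroid-sorted cluster indices serve as the discrete prosodic labels; the cluster counts and the use of plain k-means (rather than the balanced variant mentioned above) are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def prosody_labels(f0, dur, n_f0=5, n_dur=5):
    # f0, dur: arrays of per-phoneme mean F0 (normalized per speaker,
    # for speaker-independent clusters) and per-phoneme durations.
    f0_km = KMeans(n_clusters=n_f0, n_init=10).fit(f0.reshape(-1, 1))
    dur_km = KMeans(n_clusters=n_dur, n_init=10).fit(dur.reshape(-1, 1))
    # Sort cluster ids by centroid so label 0 = lowest F0 / shortest.
    f0_order = np.argsort(f0_km.cluster_centers_.ravel()).argsort()
    dur_order = np.argsort(dur_km.cluster_centers_.ravel()).argsort()
    return f0_order[f0_km.labels_], dur_order[dur_km.labels_]
```

The sorted label sequences are then what the prosody encoder consumes, which is what makes the labels intuitive to edit (raising a label raises pitch or lengthens the phoneme).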
Physics-informed neural networks (PINNs) have shown promise in solving forward and inverse problems involving partial differential equations. Despite recent progress in expanding the class of problems that PINNs can solve, most existing use cases involve simple geometric domains. To date, there is no clear way to inform PINNs about the topology of the domain in which the problem is posed. In this work, we propose a novel positional encoding mechanism for PINNs based on the eigenfunctions of the Laplace-Beltrami operator. This technique allows the creation of an input space for the neural network that represents the geometry of a given object. We approximate the eigenfunctions, as well as the operators involved in the partial differential equations, with finite elements. We extensively test and compare the proposed methodology on complex shapes, such as a coil, a heat sink and a bunny, with different physics, such as the Eikonal equation and heat transfer. We also study the sensitivity of our method to the number of eigenfunctions used, as well as to the discretization used for the eigenfunctions and the underlying operators. Our results show excellent agreement with ground-truth data in cases where traditional PINNs fail to produce a meaningful solution. We envision this new technique will expand the effectiveness of PINNs to more realistic applications.
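The encoding idea can be sketched as follows; here a simple graph Laplacian built from mesh edges stands in for the finite-element Laplace-Beltrami discretization used in the paper:

```python
# Each mesh point is fed to the PINN as its values under the first k
# eigenfunctions, instead of its raw (x, y, z) coordinates.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def lb_positional_encoding(edges, n_points, k=32):
    # edges: (m, 2) int array of mesh edges, each listed once.
    # Requires k + 1 < n_points.
    rows = np.concatenate([edges[:, 0], edges[:, 1]])
    cols = np.concatenate([edges[:, 1], edges[:, 0]])
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_points, n_points)).tocsr()
    L = sp.diags(np.asarray(A.sum(1)).ravel()) - A   # graph Laplacian
    # Smallest k + 1 eigenpairs; drop the constant eigenfunction.
    _, vecs = eigsh(L, k=k + 1, which='SM')
    return vecs[:, 1:]        # (n_points, k) network input features
```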
Deep operator networks (DeepONets) provide a powerful data-driven tool for solving parametric PDEs by learning operators, i.e., maps between infinite-dimensional function spaces. In this work, we employ physics-informed DeepONets in the context of high-dimensional Bayesian inverse problems. Traditional solution strategies require a large, and possibly infeasible, number of forward model solves, as well as the computation of parametric derivatives. To enable efficient solutions, we extend DeepONets by employing a RealNVP architecture, which yields an invertible and differentiable map between the parametric input and the branch-net output. This allows us to construct accurate approximations of the full posterior that can be readily adapted irrespective of the number of observations and the size of the observation noise. As a result, no additional forward solves are required, nor is there any need for costly sampling procedures. We demonstrate the efficacy and accuracy of the proposed methodology on inverse problems based on an anti-derivative, a reaction-diffusion, and a Darcy-flow equation.
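A sketch of the key architectural ingredient, under assumptions about dimensions: a RealNVP affine coupling layer is invertible with a cheap log-determinant, which is what makes the map between parametric input and branch-net output differentiable and invertible for posterior evaluation (in practice several such layers are stacked, alternating which half is transformed):

```python
import torch
import torch.nn as nn

class Coupling(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x):                      # returns y and log|det J|
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(s) + t             # affine transform of x2
        return torch.cat([x1, y2], dim=-1), s.sum(-1)

    def inverse(self, y):                      # exact inverse map
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)
```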
Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving partial differential equations (PDEs) in a variety of domains. While previous research on PINNs has mainly focused on constructing and balancing loss functions during training to avoid poor minima, the effect of sampling collocation points on PINN performance has largely been overlooked. In this work, we find that the performance of PINNs can vary significantly with different sampling strategies, and that using a fixed set of collocation points can be quite detrimental to the convergence of PINNs to the correct solution. In particular, (1) we hypothesize that training PINNs relies on successful "propagation" of the solution from initial and/or boundary condition points to interior points, and that PINNs with poor sampling strategies can get stuck at trivial solutions when there are propagation failures; (2) we demonstrate that propagation failures are characterized by highly imbalanced PDE residual fields, in which very high residuals are observed over very narrow regions; (3) to mitigate propagation failures, we propose a novel evolutionary sampling (Evo) method that can incrementally accumulate collocation points in regions of high PDE residuals. We further provide an extension of Evo that respects the principle of causality when solving time-dependent PDEs. We empirically demonstrate the efficacy and efficiency of our proposed methods on a variety of PDE problems.
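A minimal sketch of one evolutionary sampling round as described: points whose PDE residual exceeds the population mean survive, and the remainder are re-drawn uniformly over the domain (the residual function and its interface are placeholders):

```python
import torch

def evo_resample(model, pde_residual, points, domain_lo, domain_hi):
    # pde_residual is assumed to return the PDE residual per point; it
    # may use autograd internally, so it is not wrapped in no_grad().
    r = pde_residual(model, points).detach().abs().squeeze()
    keep = points[r > r.mean()].detach()          # retained population
    fresh = domain_lo + (domain_hi - domain_lo) * torch.rand(
        points.shape[0] - keep.shape[0], points.shape[1])
    return torch.cat([keep, fresh], dim=0)
```

Iterating this retain-and-resample loop is what makes collocation points gradually accumulate wherever the residual field is most imbalanced.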
Supervised learning in function spaces is an emerging area of machine learning research, with applications to the prediction of complex physical systems such as fluid flows, solid mechanics, and climate modeling. By directly learning maps (operators) between infinite-dimensional function spaces, these models are able to learn discretization-invariant representations of target functions. A common approach is to represent such target functions as linear combinations of basis elements learned from data. However, there are simple scenarios where, even though the target functions form a low-dimensional submanifold, a very large number of basis elements is needed for an accurate linear representation. Here we present NOMAD, a novel operator learning framework with a nonlinear decoder map capable of learning finite-dimensional representations of nonlinear submanifolds in function spaces. We show that this method is able to accurately learn low-dimensional representations of solution manifolds to partial differential equations while outperforming linear models of larger size. Additionally, we compare against state-of-the-art operator learning methods on a complex fluid dynamics benchmark and achieve competitive performance with a significantly smaller model size and training cost.
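The contrast between a linear decoder (latent code dotted with learned basis functions of the query location) and a NOMAD-style nonlinear decoder (code and query fed jointly through an MLP) can be sketched as follows; layer sizes and the 1-D query location are placeholders:

```python
import torch
import torch.nn as nn

class LinearDecoder(nn.Module):
    # s(y) = sum_i beta_i(u) * tau_i(y): a linear combination of
    # learned basis functions, as in classical operator learning.
    def __init__(self, latent=64, hidden=128):
        super().__init__()
        self.basis = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, latent))

    def forward(self, beta, y):        # beta: (B, latent), y: (B, 1)
        return (beta * self.basis(y)).sum(-1, keepdim=True)

class NonlinearDecoder(nn.Module):
    # NOMAD-style: s(y) = D([beta(u), y]) with a nonlinear map D, so a
    # small latent code can parameterize a nonlinear solution manifold.
    def __init__(self, latent=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, beta, y):
        return self.net(torch.cat([beta, y], dim=-1))
```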